Artificial intelligence (AI) can now automatically interpret medical images for clinical use. However, AI's potential use on interventional images (as opposed to images involved in triage or diagnosis), such as for guidance during surgery, remains largely untapped. This is because surgical AI systems are currently trained on post hoc analyses of data collected during live surgeries, which carries fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity, and a lack of ground truth. Here, we demonstrate that creating realistic simulated images from human models is a viable alternative to, and complement of, large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization or adaptation techniques, yields models that perform on real data comparably to models trained on a precisely matched real-data training set. Because synthetic generation of training data from human-based models scales easily, we find that our model transfer paradigm for X-ray image analysis, which we call SyntheX, can even outperform real-data-trained models owing to the effectiveness of training on larger datasets. We demonstrate the potential of SyntheX on three clinical tasks: hip image analysis, surgical robotic tool detection, and COVID-19 lung lesion segmentation. SyntheX offers an opportunity to drastically accelerate the conception, design, and evaluation of intelligent systems for X-ray-based medicine. In addition, simulated image environments offer the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time, or mitigate human error, free of the ethical and practical considerations of live human data collection.
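To make the sim-to-real idea above concrete, here is a minimal sketch of training a segmentation model on synthetic X-rays with heavy appearance randomization so the model generalizes to real images; the dataset class, augmentation recipe, and toy network are illustrative assumptions, not the authors' SyntheX code.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class SyntheticXrayDataset(Dataset):
    """Stand-in for a dataset of simulated radiographs with exact labels."""
    def __init__(self, n=256, size=64):
        self.images = torch.rand(n, 1, size, size)                 # simulated X-rays
        self.masks = (torch.rand(n, 1, size, size) > 0.5).float()  # exact ground truth

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        x, y = self.images[i], self.masks[i]
        # Domain randomization: perturb contrast, brightness, and noise so
        # the model cannot overfit to simulation-specific appearance.
        x = x * (0.5 + torch.rand(1)) + 0.2 * (torch.rand(1) - 0.5)
        x = x + 0.05 * torch.randn_like(x)
        return x.clamp(0, 1), y

model = nn.Sequential(  # toy segmentation head; a U-Net would be typical
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for x, y in DataLoader(SyntheticXrayDataset(), batch_size=32):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```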
Action recognition is an important task for video understanding with broad applications. However, developing an effective action recognition solution often requires extensive engineering effort to build and test different combinations of modules and their hyperparameters. In this demo, we present AutoVideo, a Python system for automated video action recognition. AutoVideo features 1) a highly modular and extendable infrastructure following a standard pipeline language, 2) an exhaustive list of primitives for pipeline construction, 3) data-driven tuners that save the effort of pipeline tuning, and 4) an easy-to-use graphical user interface (GUI). AutoVideo is released under the MIT license at https://github.com/datamllab/autovideo
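The modular pipeline-plus-tuner design can be illustrated with a short sketch. This is not AutoVideo's actual API (see the linked repository for that); the primitive names and the random-search tuner below are purely illustrative.

```python
import random

# Each pipeline slot offers a list of interchangeable primitives.
PRIMITIVES = {
    "transform": ["resize", "random_crop"],
    "model": ["tsn", "tsm", "i3d"],
    "lr": [1e-2, 1e-3, 1e-4],
}

def build_pipeline(config):
    # In a real system each name maps to a module; here we just record it.
    return {"steps": [config["transform"], config["model"]], "lr": config["lr"]}

def evaluate(pipeline):
    return random.random()  # stand-in for validation accuracy

# Data-driven tuner: random search over pipeline configurations.
best = max(
    ({k: random.choice(v) for k, v in PRIMITIVES.items()} for _ in range(20)),
    key=lambda cfg: evaluate(build_pipeline(cfg)),
)
print("selected config:", best)
```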
Learning-based low-rank approximation algorithms can significantly improve the performance of randomized low-rank approximation with a sketch matrix. With values learned by such algorithms and fixed non-zero positions, sketch matrices can significantly reduce the test error of low-rank approximation. However, there is still no good method for learning the non-zero positions or for overcoming the out-of-distribution performance loss. In this work, we introduce two new methods, Learning Sparsity and Learning Randomness, which learn a better sparsity pattern and add randomness to the values of the sketch matrix. Both methods can be applied with any learning-based algorithm that uses a sketch matrix directly. Our experiments show that the two methods improve both the test error and the out-of-distribution test error of a previous learning-based algorithm without adding much complexity.
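For context, below is a minimal numpy sketch of the sketch-based low-rank approximation these methods build on; the sparsity pattern and values of S are exactly what Learning Sparsity and Learning Randomness would learn, whereas here S is a plain CountSketch-style random matrix for illustration.

```python
import numpy as np

def sketched_low_rank(A, m, k, rng):
    n = A.shape[0]
    # CountSketch-style S: one random +/-1 entry per column, at a random row.
    S = np.zeros((m, n))
    S[rng.integers(0, m, n), np.arange(n)] = rng.choice([-1.0, 1.0], n)
    SA = S @ A                        # small sketch of A
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)
    P = A @ Vt.T @ Vt                 # project A onto the rowspace of SA
    U, s, Wt = np.linalg.svd(P, full_matrices=False)
    return U[:, :k] * s[:k] @ Wt[:k]  # best rank-k approximation in that space

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 100))  # near-low-rank
A_k = sketched_low_rank(A, m=30, k=10, rng=rng)
print("relative error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))
```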
We present a robust, privacy-preserving visual localization algorithm using event cameras. While event cameras can potentially enable robust localization due to their high dynamic range and small motion blur, the sensors exhibit large domain gaps, making it difficult to directly apply conventional image-based localization algorithms. To mitigate the gap, we propose applying event-to-image conversion prior to localization, which leads to stable localization. From a privacy perspective, event cameras capture only a fraction of the visual information captured by normal cameras, and thus can naturally hide sensitive visual details. To further enhance the privacy protection in our event-based pipeline, we introduce protection at two levels, namely the sensor and network levels. Sensor-level protection aims at hiding facial details with lightweight filtering, while network-level protection targets hiding the user's entire view in private-scene applications using a novel neural network inference pipeline. Both levels of protection involve lightweight computation and incur only a small performance loss. We thus expect our method to serve as a building block for practical location-based services using event cameras. The code and dataset will be made public through the following link: https://github.com/82magnolia/event_localization.
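The event-to-image conversion step can be illustrated by its simplest possible variant, accumulating signed polarity events into a frame; real pipelines use learned reconstruction networks, so this sketch only conveys the representation change being made.

```python
import numpy as np

def events_to_frame(events, height, width):
    """events: array of (x, y, polarity) rows with polarity in {-1, +1}."""
    frame = np.zeros((height, width))
    for x, y, p in events:
        frame[int(y), int(x)] += p      # signed accumulation per pixel
    # Normalize to [0, 1] so the result resembles an intensity image.
    if np.ptp(frame) > 0:
        frame = (frame - frame.min()) / np.ptp(frame)
    return frame

rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 64, 500),      # x coordinates
                      rng.integers(0, 48, 500),      # y coordinates
                      rng.choice([-1, 1], 500)])     # polarities
img = events_to_frame(ev, height=48, width=64)
print(img.shape, img.min(), img.max())
```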
Current computer vision models, unlike the human visual system, cannot yet achieve general-purpose visual understanding. Existing efforts to create a general vision model are limited in the scope of assessed tasks and offer no overarching framework to perform them holistically. We present a new comprehensive benchmark, General-purpose Visual Understanding Evaluation (G-VUE), covering the full spectrum of visual cognitive abilities with four functional domains: Perceive, Ground, Reason, and Act. The four domains are embodied in 11 carefully curated tasks, from 3D reconstruction to visual reasoning and manipulation. Along with the benchmark, we provide a general encoder-decoder framework that allows evaluating an arbitrary visual representation on all 11 tasks. We evaluate various pre-trained visual representations with our framework and observe that (1) Transformer-based visual backbones generally outperform CNN-based backbones on G-VUE, and (2) visual representations from vision-language pre-training are superior to those from vision-only pre-training across visual tasks. With G-VUE, we provide a holistic evaluation standard to motivate research toward building general-purpose visual systems via obtaining more general-purpose visual representations.
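The encoder-decoder evaluation pattern might look like the following sketch: one frozen pretrained encoder shared across tasks, with a lightweight trainable decoder per task. The shapes and task heads are illustrative assumptions, not the benchmark's actual code.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # stand-in
        for p in self.parameters():
            p.requires_grad = False   # the representation under evaluation is fixed

    def forward(self, x):
        return self.backbone(x).flatten(2).mean(-1)  # pooled global feature

def make_decoder(task, dim=128):
    heads = {"classify": nn.Linear(dim, 10),     # e.g. reasoning as a choice task
             "ground":   nn.Linear(dim, 4),      # e.g. bounding-box regression
             "act":      nn.Linear(dim, 7)}      # e.g. action logits
    return heads[task]

encoder = FrozenEncoder()
for task in ["classify", "ground", "act"]:
    decoder = make_decoder(task)                 # only this part is trained
    feats = encoder(torch.rand(8, 3, 224, 224))
    print(task, decoder(feats).shape)
```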
Visual commonsense understanding requires Vision Language (VL) models to not only understand image and text but also cross-reference between them to fully integrate and achieve comprehension of the visual scene described. Recently, various approaches have been developed and have achieved high performance on visual commonsense benchmarks. However, it is unclear whether the models really understand the visual scene and the underlying commonsense knowledge, due to limited evaluation data resources. To provide an in-depth analysis, we present a Multimodal Evaluation (ME) pipeline that automatically generates question-answer pairs to test models' understanding of the visual scene, the text, and related knowledge. We then take a step further and show that training with the ME data boosts the model's performance in standard VCR evaluation. Lastly, our in-depth analysis and comparison reveal interesting findings: (1) semantically low-level information can assist the learning of high-level information, but not the opposite; (2) visual information is generally under-utilized compared with text.
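A toy version of such automatic question-answer generation from structured scene annotations is sketched below; the templates and annotation fields are illustrative assumptions rather than the ME pipeline's actual rules.

```python
# Structured annotations for one scene (illustrative schema).
scene = {
    "objects": [{"name": "person", "attr": "sitting"},
                {"name": "dog", "attr": "brown"}],
    "relations": [("person", "holding", "leash")],
}

def generate_qa(scene):
    qa = []
    for obj in scene["objects"]:                     # low-level probes
        qa.append((f"Is there a {obj['name']} in the image?", "yes"))
        qa.append((f"What is the {obj['name']} like?", obj["attr"]))
    for s, rel, o in scene["relations"]:             # higher-level probes
        qa.append((f"What is the {s} {rel}?", o))
    return qa

for q, a in generate_qa(scene):
    print(q, "->", a)
```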
Various depth estimation models are now widely used on mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset collected with a ZED stereo camera capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device, and a detailed description of the solutions is provided in this paper.
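As a rough illustration of how such on-device latency could be measured with TensorFlow Lite, consider the sketch below; the model path, input, and warm-up protocol are placeholders, and the challenge's official measurement tool may differ.

```python
import time
import numpy as np
import tensorflow as tf

# Placeholder path to a converted depth model; a real file is required to run.
interpreter = tf.lite.Interpreter(model_path="depth_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(*inp["shape"]).astype(np.float32)  # dummy RGB input
times = []
for _ in range(50):
    interpreter.set_tensor(inp["index"], frame)
    start = time.perf_counter()
    interpreter.invoke()
    times.append(time.perf_counter() - start)

depth = interpreter.get_tensor(out["index"])
print(f"median latency: {1000 * np.median(times):.1f} ms "
      f"({1 / np.median(times):.1f} FPS), output shape {depth.shape}")
```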
Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.
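The two-stage research-and-revision loop can be sketched as follows, with the retrieval and agreement checks stubbed out; a real system would back them with web search and a large language model, so treat this only as a schematic.

```python
def research(claim):
    # Stub: would issue search queries and return evidence snippets.
    evidence = {"Paris is the capital of France.": "Paris is France's capital."}
    return evidence.get(claim)

def agrees(claim, evidence):
    return evidence is not None    # stub for an LLM-based agreement check

def revise(claim, evidence):
    # Stub: would minimally rewrite the claim to match the evidence.
    return claim if evidence else "[unsupported claim removed]"

def rarr_loop(output_sentences):
    revised, attributions = [], []
    for claim in output_sentences:
        ev = research(claim)                              # stage 1: research
        revised.append(claim if agrees(claim, ev) else revise(claim, ev))  # stage 2
        attributions.append(ev)
    return revised, attributions

text = ["Paris is the capital of France.", "The moon is made of cheese."]
print(rarr_loop(text))
```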
Accurate vehicle type classification plays a significant role in intelligent transportation systems. It helps road authorities understand traffic conditions and typically feeds into traffic light control systems so they can respond accordingly to alleviate congestion. New technologies and comprehensive data sources, such as aerial photography and remote sensing data, provide richer, higher-dimensional information. Meanwhile, owing to the rapid development of deep neural networks, image-based vehicle classification methods can better extract the underlying objective features when processing such data. Several deep learning models have recently been proposed for this problem. However, traditional purely convolutional approaches are limited in extracting global information, and complex environments such as bad weather severely restrict recognition capability. To improve vehicle type classification under complex environments, this study proposes a novel Densely Connected Convolutional Transformer-in-Transformer neural network (Dense-TNT) framework, which stacks densely connected convolutional network (DenseNet) and Transformer-in-Transformer (TNT) layers. Vehicle data from three regions under four different weather conditions are used to evaluate recognition capability. Experiments show that the recognition ability of our proposed vehicle classification model degrades little even under heavy fog.
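A generic composition of the two named components, DenseNet convolutional features feeding transformer layers for classification, is sketched below; it is not the authors' exact Dense-TNT architecture, only an illustration of the stacking idea.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DenseTransformerClassifier(nn.Module):
    def __init__(self, num_classes=6, dim=256):
        super().__init__()
        self.cnn = densenet121(weights=None).features    # densely connected conv blocks
        self.proj = nn.Conv2d(1024, dim, kernel_size=1)  # 1024 = DenseNet-121 output
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.proj(self.cnn(x))               # (B, dim, H', W') feature map
        tokens = f.flatten(2).transpose(1, 2)    # spatial positions become tokens
        return self.head(self.transformer(tokens).mean(1))

model = DenseTransformerClassifier()
logits = model(torch.rand(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 6])
```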
Distributed machine learning enables scalability and computational offloading, but requires significant communication. Communication efficiency in distributed learning settings is therefore an important consideration, especially when communication is wireless and battery-powered devices are employed. In this paper, we develop a censoring-based heavy ball (CHB) method for distributed learning in a server-worker architecture. Each worker self-censors unless its local gradient is sufficiently different from the previously transmitted one. The significant practical advantages of the heavy ball (HB) method for learning problems are well known, but the question of reducing its communication has not been addressed. CHB takes advantage of HB smoothing to eliminate reports of small changes, and provably achieves a linear convergence rate equal to that of the classical HB method for smooth, strongly convex objective functions. The convergence guarantees of CHB are theoretically justified for both convex and nonconvex cases. In addition, we prove that, under certain conditions, at least half of all communications can be eliminated without any impact on the convergence rate. Extensive numerical results validate the communication efficiency of CHB on both synthetic and real datasets, covering convex, nonconvex, and nondifferentiable cases. Given a target accuracy, CHB can significantly reduce the number of communications compared with existing algorithms, achieving the same accuracy without slowing down optimization.
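The censoring rule admits a compact sketch: each worker transmits its gradient only when it differs sufficiently from the last transmitted one, and the server runs heavy ball updates on the cached gradients. The least-squares problem and threshold below are illustrative, not the paper's exact setup.

```python
import numpy as np

def local_grad(w, A, b):
    return A.T @ (A @ w - b)       # least-squares gradient at one worker

rng = np.random.default_rng(0)
workers = [(rng.standard_normal((20, 5)), rng.standard_normal(20))
           for _ in range(4)]
w = np.zeros(5)
w_prev = w.copy()
last_sent = [np.zeros(5) for _ in workers]   # server's cached worker gradients
lr, beta, tau, sent = 0.01, 0.9, 1e-3, 0

for t in range(200):
    for i, (A, b) in enumerate(workers):
        g = local_grad(w, A, b)
        if np.linalg.norm(g - last_sent[i]) >= tau:   # censoring test
            last_sent[i] = g                          # transmit to the server
            sent += 1
        # otherwise: self-censor; the server reuses last_sent[i]
    g_hat = np.mean(last_sent, axis=0)
    # Heavy ball step: gradient descent plus a momentum term.
    w, w_prev = w - lr * g_hat + beta * (w - w_prev), w

print("communications:", sent, "of", 200 * len(workers))
```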